
    Fine-grained Activity Classification in Assembly Based on Multi-visual Modalities

    Assembly activity recognition and prediction help to improve productivity, quality control, and safety measures in smart factories. This study aims to sense, recognize, and predict a worker's continuous fine-grained assembly activities in a manufacturing platform. We propose a two-stage network for workers' fine-grained activity classification by leveraging scene-level and temporal-level activity features. The first stage is a feature awareness block that extracts scene-level features from multi-visual modalities, including red, green, blue (RGB) and hand skeleton frames. We use the transfer learning method in the first stage and compare three different pre-trained feature extraction models. Then, we transmit the feature information from the first stage to the second stage to learn the temporal-level features of activities. The second stage consists of Recurrent Neural Network (RNN) layers and a final classifier. We compare the performance of two different RNNs in the second stage: the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). The partial video observation method is used to predict fine-grained activities. In experiments on trimmed activity videos, our model achieves an accuracy of >99% on our dataset and >98% on the public dataset UCF 101, outperforming state-of-the-art models. The prediction model achieves an accuracy of >97% in predicting activity labels using 50% of the onset activity video information. In experiments on an untrimmed video with continuous assembly activities, we combine our recognition and prediction models and achieve an accuracy of >91% in real time, surpassing state-of-the-art models for the recognition of continuous assembly activities.
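    The abstract describes the architecture only at a high level; the following is a minimal PyTorch sketch of the two-stage idea (a pre-trained CNN backbone for scene-level features feeding a GRU and final classifier for temporal features). All module choices, names, and hyperparameters here are illustrative assumptions, not the authors' implementation.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    class TwoStageActivityClassifier(nn.Module):
        def __init__(self, num_classes: int, hidden_size: int = 256):
            super().__init__()
            # Stage 1: transfer-learned feature extractor. A ResNet-18 backbone
            # is used here only as an example of a pre-trained model.
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.feature_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()  # keep 512-d features, drop classifier
            self.extractor = backbone
            # Stage 2: temporal model (GRU; an LSTM is a drop-in alternative)
            # followed by the final classifier.
            self.rnn = nn.GRU(self.feature_dim, hidden_size, batch_first=True)
            self.classifier = nn.Linear(hidden_size, num_classes)

        def forward(self, clips: torch.Tensor) -> torch.Tensor:
            # clips: (batch, time, 3, H, W) RGB frames
            b, t = clips.shape[:2]
            feats = self.extractor(clips.flatten(0, 1))     # (b*t, feature_dim)
            feats = feats.view(b, t, self.feature_dim)      # restore time axis
            _, last_hidden = self.rnn(feats)                # (1, b, hidden)
            return self.classifier(last_hidden.squeeze(0))  # (b, num_classes)

    # Prediction from partial observation: feed only the onset of the clip,
    # e.g. model(clips[:, : t // 2]) for 50% of the activity video.
    ```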

    Real-Time Tool Detection in Smart Manufacturing using You-Only-Look-Once (YOLO)v5

    Computer vision plays an essential role in Industry 4.0 by enabling machinery to perceive, analyze, and control production processes. Object detection, a computer vision technique that accurately classifies and localizes objects within images, has gained significant interest. This technique can be applied in various domains, including manufacturing, to assist in the detection of different tools. In this paper, the You-Only-Look-Once (YOLO)v5 real-time object detection technique is developed and optimized to detect different tool types and their locations in a manufacturing setting. To train the neural network, a dataset of 3,286 tool images from the internet was collected and annotated. To enhance the model's ability to generalize, three augmented variants of each image were created to improve rotation invariance. The model's training scheme was further optimized with stochastic gradient descent after configuring different hyperparameters such as learning rate and momentum. The fine-tuned model achieved a mean average precision of 98.3%, demonstrating the model's high precision in detecting different tool types and their locations in real time.
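    As a hedged illustration of the augmentation step described above (three rotated variants per image for rotation invariance), a sketch along these lines could generate the variants. The rotation angles, paths, and label handling are assumptions, since the abstract does not state them.

    ```python
    from pathlib import Path
    from PIL import Image

    ANGLES = (90, 180, 270)  # example rotations; the paper does not list them

    def augment_dataset(src_dir: str, dst_dir: str) -> None:
        """Save each source image plus three rotated variants."""
        dst = Path(dst_dir)
        dst.mkdir(parents=True, exist_ok=True)
        for img_path in sorted(Path(src_dir).glob("*.jpg")):
            image = Image.open(img_path)
            image.save(dst / img_path.name)  # keep the original
            for angle in ANGLES:
                # expand=True keeps the whole rotated image in frame; the
                # bounding-box labels must be rotated accordingly (omitted).
                image.rotate(angle, expand=True).save(
                    dst / f"{img_path.stem}_rot{angle}{img_path.suffix}"
                )

    # YOLOv5 fine-tuning with SGD is then run via the ultralytics/yolov5
    # train.py entry point, e.g. (illustrative values; lr0 and momentum
    # live in the --hyp YAML):
    #   python train.py --data tools.yaml --weights yolov5s.pt \
    #                   --epochs 100 --optimizer SGD --hyp hyp.tools.yaml
    ```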

    Real-Time Human-Computer Interaction using Eye Gazes

    Eye gaze emerges as a unique channel in human-computer interaction (HCI) that recognizes human intention based on gaze behavior and enables contactless access to control and operate software interfaces on computers. In this paper, we propose a real-time HCI system using eye gaze. First, we capture and track eyes using the Dlib 68-point landmark detector and design an eye gaze recognition model to recognize four types of eye gazes. Then, we construct an instance segmentation model to recognize and segment tools and parts using the Mask Region-Based Convolutional Neural Network (R-CNN) method. After that, we design an HCI software interface by integrating and visualizing the proposed eye gaze recognition and instance segmentation models. The HCI system captures, tracks, and recognizes the eye gaze through a red-green-blue (RGB) webcam and provides responses based on the detected eye gaze, including tool and part segmentation, object selection, and interface switching. Experimental results show that the proposed eye gaze recognition method achieves an accuracy of >99% at a recommended distance between the eyes and the webcam, and the instance segmentation model achieves an accuracy of 99%. The experimental results of the HCI system operation demonstrate the feasibility and robustness of the proposed real-time HCI system.
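    The eye-capture step above relies on the standard dlib 68-point landmark model; a minimal sketch of extracting eye landmarks from a webcam frame might look as follows. The gaze classifier and Mask R-CNN models are the paper's own and are not reproduced here; the .dat model path is the usual public dlib download, assumed to be available locally.

    ```python
    import cv2
    import dlib

    # Standard dlib models: face detector plus the 68-point landmark predictor.
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    LEFT_EYE = range(36, 42)   # 68-point model: landmarks 36-41 (one eye)
    RIGHT_EYE = range(42, 48)  # landmarks 42-47 (the other eye)

    def eye_points(frame_bgr):
        """Return (left_eye, right_eye) landmark coordinates, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if not faces:
            return None
        shape = predictor(gray, faces[0])
        left = [(shape.part(i).x, shape.part(i).y) for i in LEFT_EYE]
        right = [(shape.part(i).x, shape.part(i).y) for i in RIGHT_EYE]
        return left, right

    # Webcam sketch: each frame's eye landmarks would feed the paper's
    # gaze recognition model (not reproduced here).
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        print(eye_points(frame))
    cap.release()
    ```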

    Prognostic significance of matrix metalloproteinase-7 in gastric cancer survival: a meta-analysis.

    The prognostic role of matrix metalloproteinase-7 (MMP7) in gastric cancer survival has been widely evaluated, but the results are controversial. We therefore conducted a meta-analysis of the prognostic significance of MMP7 in gastric cancer survival and of its association with clinicopathological parameters. We searched popular databases from 1988 until October 2014 to gather eligible peer-reviewed papers addressing the prognostic effect of MMP7 on gastric cancer patients' survival. The CASP checklist was used for quality appraisal. The pooled hazard ratio (HR) for survival and odds ratio (OR) for association, with their 95% confidence intervals (CI), were used as summary measures. In total, 1208 gastric cancer patients from nine studies were included in the meta-analysis. The pooled HR estimate for survival was 2.01 (95% CI = 1.62-2.50, P < 0.001), indicating a significant adverse prognostic effect for MMP7. Sensitivity analysis detected no dominance of any single study, and no publication bias was detected by Egger's or Begg's tests. Clinicopathological assessment revealed that higher MMP7 expression is associated with deeper invasion (pooled OR = 3.20; 95% CI = 1.14-8.96; P = 0.026), higher TNM stage (pooled OR = 3.67; 95% CI = 2.281-5.99; P < 0.001), lymph node metastasis (pooled OR = 2.84; 95% CI = 1.89-4.25; P < 0.001), and distant metastasis (pooled OR = 3.68; 95% CI = 1.85-7.29; P < 0.001), but not with histological grade. This meta-analysis indicates a significant adverse prognostic effect of MMP7 on gastric cancer survival; additionally, MMP7 overexpression was associated with an aggressive tumor phenotype.
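    For readers unfamiliar with the pooling behind figures like HR = 2.01 (95% CI = 1.62-2.50), here is a small sketch of standard fixed-effect inverse-variance pooling on the log scale, with Cochran's Q and I² for heterogeneity. The input numbers are placeholders, not the nine studies' actual estimates.

    ```python
    import math

    def pool_fixed_effect(hazard_ratios, log_se):
        """Inverse-variance fixed-effect pooling of hazard ratios (log scale)."""
        logs = [math.log(hr) for hr in hazard_ratios]
        weights = [1.0 / se ** 2 for se in log_se]
        pooled = sum(w * lg for w, lg in zip(weights, logs)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        ci = (math.exp(pooled - 1.96 * se_pooled),
              math.exp(pooled + 1.96 * se_pooled))
        # Cochran's Q and I^2 quantify between-study heterogeneity.
        q = sum(w * (lg - pooled) ** 2 for w, lg in zip(weights, logs))
        df = len(hazard_ratios) - 1
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
        return math.exp(pooled), ci, q, i2

    # Placeholder inputs: per-study HRs and standard errors of log(HR).
    hr, ci, q, i2 = pool_fixed_effect([1.8, 2.3, 2.0], [0.20, 0.25, 0.30])
    print(f"pooled HR = {hr:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}, I2 = {i2:.0f}%")
    ```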

    Flow diagram for study selection process.

    The figure shows how the final set of included studies was selected from the primary search records.

    Meta-analysis of MMP7 overexpression association with clinicopathological parameters in included studies.

    OR: pooled odds ratio; CI: confidence interval; Z: test value for the fixed/random effect model; P_Z: P value for the Z test; P_Q: P value for the heterogeneity Q test; I²%: quantitative heterogeneity metric I².
    (a) Fixed-effect model OR (95% CI)
    (b) Random-effect model OR (95% CI)
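    For reference, the I² statistic reported alongside the Q test is conventionally derived from Q and the degrees of freedom k - 1, where k is the number of studies; this is the standard definition, not something specific to this table:

    ```latex
    I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%
    ```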

    Subgroup meta-analysis results for MMP7 impact on gastric cancer survival.

    HR: pooled hazard ratio; CI: confidence interval; Z: test value for the fixed/random effect model; P_Z: P value for the Z test; P_Q: P value for the heterogeneity Q test.
    (a) Fixed-effect model HR (95% CI)
    (b) Random-effect model HR (95% CI)

    Forest plot of the overall hazard ratio estimate for MMP7 impact on GC survival.

    The middle point of the diamond represents the pooled HR, and its left and right corners represent the 95% CI. Each horizontal line belongs to an individual study; the line's middle point and length represent that study's extracted HR and 95% CI. The area of the box on each line represents the individual study's weight of contribution to the meta-analysis.
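    Purely as an illustration of the layout this caption describes (weight-scaled study markers with CI whiskers and a pooled-estimate diamond whose corners mark the CI), a matplotlib sketch might look like this; the study rows are placeholders, and only the pooled values are taken from the abstract above.

    ```python
    import matplotlib.pyplot as plt

    studies = [
        ("Study A", 1.8, 1.2, 2.7, 0.40),
        ("Study B", 2.3, 1.5, 3.5, 0.35),
        ("Study C", 2.0, 1.1, 3.6, 0.25),
    ]  # (label, HR, CI low, CI high, weight) -- placeholder values
    pooled_hr, pooled_lo, pooled_hi = 2.01, 1.62, 2.50  # from the abstract

    fig, ax = plt.subplots()
    for row, (name, hr, lo, hi, weight) in enumerate(studies):
        ax.plot([lo, hi], [row, row], color="black")         # 95% CI line
        ax.plot(hr, row, marker="s", color="gray",
                markersize=4 + 14 * weight)                  # weight-scaled box
        ax.text(hi + 0.1, row, name, va="center")
    # Pooled estimate as a diamond: left/right corners sit at the CI bounds.
    y = len(studies)
    ax.fill([pooled_lo, pooled_hr, pooled_hi, pooled_hr],
            [y, y - 0.25, y, y + 0.25], color="black")
    ax.axvline(1.0, linestyle="--", color="gray")            # null effect, HR = 1
    ax.set_xlabel("Hazard ratio")
    ax.set_yticks([])
    plt.show()
    ```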